
    The Dynamics of Public Opinion in Complex Networks

    Get PDF
    This paper studies the problem of public opinion formation and concentrates on the interplay among three factors: individual attributes, environmental influences and information flow. We present a simple model to analyze opinion dynamics on four types of networks. Our simulations suggest that regular communities establish not only local consensus but also global diversity in public opinion. However, when small-world, random, or scale-free networks model social relationships, the results are sensitive to the elasticity coefficient of environmental influences and to the network's average connectivity. For example, a community with higher average connectivity has a higher probability of reaching consensus. Yet it is misleading to predict outcomes merely from the characteristic path length of a network. As environmental influences and average connectivity change, sensitive areas are discovered in the system: regions where interior randomness emerges and we cannot predict unequivocally how many opinions will remain at equilibrium. We also investigate the role of authoritative individuals in information control. While enhancing average connectivity facilitates the diffusion of the authoritative opinion, it also exposes individuals to disturbance from non-authorities. A moderate average connectivity may therefore be preferable, because the public is then most likely to form an opinion aligned with the authoritative one. In a community with a scale-free structure, the influence of authoritative individuals remains constant as the average connectivity changes. Provided that the influence of individuals is proportional to the number of their acquaintances, a scale-free network requires the smallest percentage of authorities for a controlled consensus.
This study shows that the dynamics of public opinion varies from community to community, owing to differences in the impressionability of individuals and in the social network structure of each community.

Keywords: Public Opinion, Complex Network, Consensus, Agent-Based Model
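The abstract does not specify the update rule, so as a rough, hypothetical sketch (the function and parameter names below are ours, not the paper's), a majority-rule update on a ring lattice illustrates how a regular network can settle into local consensus with global diversity:

```python
import random

def simulate_opinions(n=60, k=2, steps=2000, seed=0):
    """Majority-rule opinion dynamics on a ring lattice: a randomly
    chosen agent adopts the majority opinion among its 2k nearest
    neighbours; ties leave its current opinion unchanged."""
    rng = random.Random(seed)
    opinions = [rng.choice([0, 1]) for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        neigh = [opinions[(i + d) % n] for d in range(-k, k + 1) if d != 0]
        ones = sum(neigh)
        if 2 * ones > len(neigh):
            opinions[i] = 1
        elif 2 * ones < len(neigh):
            opinions[i] = 0
    return opinions

final = simulate_opinions()
# Regular lattices tend to freeze into locally uniform blocks rather
# than a single global opinion: neighbourhoods agree while the
# community as a whole can retain several opinions.
```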

    Statistical methods for the detection, analyses and integration of biomarkers in the human genome and transcriptome

    Get PDF
    Most human diseases have been shown to have a genetic basis linked to the regulation of gene expression at the transcriptional or post-transcriptional level. In the central dogma of biology, deoxyribonucleic acid (DNA) is transcribed to messenger ribonucleic acid (mRNA) and then translated into proteins; dysfunction in any of these processes may contribute to the development of disease. Sources of such potential irregularities include, but are not limited to, the following: point mutations in DNA sequences, copy number alterations (CNAs), and abnormal mRNA and microRNA (miRNA) expression. MiRNAs are a type of non-coding RNA that inhibit the transcription and/or translation of specific target mRNAs. Current technologies allow the identification of biomarkers and the study of the complex interplay between DNA, mRNA, miRNA and phenotypic variation. This thesis aims to tackle the statistical challenges that have arisen with the application of these technologies to investigate various genomic and transcriptomic alterations. In study I, modified least-variant set normalization for miRNA microarray, a new algorithm and software were developed for microRNA array data normalization. The algorithm selects the miRNAs with the least array-to-array variation as the reference set for normalization. The selection process was refined by accounting for the considerable differences in variance between probes. Data are provided to show that this algorithm has better operating characteristics than other methods. In study II, joint estimation of isoform expression and isoform-specific read distribution using multi-sample RNA-Seq data, a joint model and software were developed to estimate isoform-specific read distributions and gene isoform expression using RNA-sequencing data from multiple samples.
Exploiting similarities in the shape of the read distributions across samples solves the problem that the non-uniform read intensity pattern is not identifiable from a single sample. In study III, integrated molecular portrait of non-small cell lung cancers, molecular markers at the DNA, mRNA and miRNA levels that can distinguish between different histopathological subtypes of non-small cell lung cancer were identified. Additionally, using integrated genomic data including CNAs and mRNA and miRNA expression data, three potential driver genes were identified in non-small cell lung cancer, namely MRPS22, NDRG1 and RNF7. Furthermore, a potential driver miRNA, hsa-miR-944, was identified. In study IV, integration of somatic mutation, expression and functional data to reveal potential driver genes predictive of breast cancer survival, an analytic pipeline to process large-scale whole-genome and transcriptome sequencing data was created, and an integrative approach based on network enrichment analysis was proposed to combine information across different types of omics data and identify putative cancer driver genes. Analysis of 60 patients with breast cancer provided evidence that patients carrying more mutated potential driver genes had poorer survival.
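As an illustrative sketch only (the published algorithm additionally refines the selection by modeling probe-wise variance differences, and all names here are ours), least-variant-set normalization can be thought of as picking the probes that vary least across arrays and aligning each array to them:

```python
import statistics

def lvs_normalize(arrays, frac=0.2):
    """Pick the fraction of probes with the smallest array-to-array
    variance as the reference set, then shift each array so its
    reference probes share a common mean."""
    n_probes = len(arrays[0])
    variances = [statistics.pvariance([a[p] for a in arrays])
                 for p in range(n_probes)]
    n_ref = max(1, round(frac * n_probes))
    ref = sorted(range(n_probes), key=lambda p: variances[p])[:n_ref]
    target = statistics.mean(a[p] for a in arrays for p in ref)
    normalized = []
    for a in arrays:
        offset = target - statistics.mean(a[p] for p in ref)
        normalized.append([x + offset for x in a])
    return normalized

raw = [[1.0, 2.0, 10.0], [1.2, 2.2, 20.0]]
norm = lvs_normalize(raw, frac=0.67)
# Probes 0 and 1 barely vary between arrays, so they form the
# reference set; after normalization they line up across arrays,
# while the truly differential probe 2 keeps its signal.
```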

    Geospatial-Based Environmental Modelling for Coastal Dune Zone Management

    Get PDF
    To maintain biodiversity and ecological function of coastal dune areas, it is important that practical and effective environmental management strategies are developed. Advances in geospatial technologies offer a potentially very useful source of data for studies in this environment. This research project aimed to develop geospatial data-based environmental modelling for coastal dune complexes to contribute to effective conservation strategies, with particular reference to the Buckroney dune complex in Co. Wicklow, Ireland. The project conducted a general comparison of different geospatial data collection methods for topographic modelling of the Buckroney dune complex. These data collection methods included small-scale survey data from aerial photogrammetry, optical satellite imagery, radar and LiDAR data, and ground-based, large-scale survey data from Total Station (TS), Real Time Kinematic (RTK) Global Positioning System (GPS), terrestrial laser scanners (TLS) and Unmanned Aircraft Systems (UAS). The results identified the advantages and disadvantages of the respective technologies and demonstrated that spatial data from high-end methods based on LiDAR, TLS and UAS technologies enabled high-resolution, high-accuracy 3D datasets to be gathered quickly and relatively easily for the Buckroney dune complex. Analysis of the 3D topographic modelling based on LiDAR, TLS and UAS technologies highlighted the efficacy of UAS technology, in particular, for 3D topographic modelling of the study site. The project then explored the application of a UAS-mounted multispectral sensor for 3D vegetation mapping of the site. The Sequoia multispectral sensor used in this research has green, red, red-edge and near-infrared (NIR) wavebands, and a normal RGB sensor. The outcomes included an orthomosaic model, a 3D surface model and multispectral imagery of the study site. Nine classification strategies were used to examine the efficacy of UAS-mounted multispectral data for vegetation mapping. These strategies involved different band combinations based on the three multispectral bands from the RGB sensor, the four multispectral bands from the multispectral sensor, and six widely used vegetation indices. There were 235 sample areas (1 m × 1 m) used for an accuracy assessment of the classification of the vegetation mapping. The results showed vegetation type classification accuracies ranging from 52% to 75%, and demonstrated that the addition of UAS-mounted multispectral data improved the classification accuracy of coastal vegetation mapping of the Buckroney dune complex.
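The abstract lists widely used vegetation indices among the band combinations; NDVI, built from the red and NIR bands the Sequoia sensor provides, is a representative example (the function below is our own illustrative sketch, assuming per-pixel reflectance values in [0, 1]):

```python
def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index for one pixel:
    NDVI = (NIR - Red) / (NIR + Red).  Healthy vegetation reflects
    strongly in NIR and absorbs red, pushing NDVI towards 1;
    eps guards against division by zero on dark pixels."""
    return (nir - red) / (nir + red + eps)

print(round(ndvi(0.50, 0.08), 3))  # dense vegetation → 0.724
print(round(ndvi(0.30, 0.28), 3))  # bare sand → 0.034
```

Mapping such an index over every pixel of the UAS orthomosaic gives one of the extra feature bands a classifier can use alongside the raw spectral bands.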

    Ensuring Readability and Data-fidelity using Head-modifier Templates in Deep Type Description Generation

    Full text link
    A type description is a succinct noun compound that helps humans and machines quickly grasp the informative and distinctive characteristics of an entity. Entities in most knowledge graphs (KGs) still lack such descriptions, calling for automatic methods to supplement this information. However, existing generative methods either overlook the grammatical structure or make factual mistakes in the generated text. To solve these problems, we propose a head-modifier template-based method that ensures both the readability and the data fidelity of generated type descriptions. We also propose a new dataset and two automatic metrics for this task. Experiments show that our method improves substantially over baselines and achieves state-of-the-art performance on both datasets.
Comment: ACL 201
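The head-modifier intuition itself (independent of the paper's neural model; the names and example below are ours) is that a type description is a noun compound whose head fixes the entity's type and whose modifiers carry the distinctive facts:

```python
def fill_template(head, modifiers):
    """Compose a type description: the head noun guarantees a
    grammatical, type-correct phrase, while the modifiers supply
    entity-specific, distinctive information."""
    return " ".join(modifiers + [head])

desc = fill_template("club", ["English", "football"])
print(desc)  # → English football club
```

Getting the head wrong breaks data fidelity (the entity is not that type), while garbling the modifier order breaks readability, which is why the two roles are constrained separately.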

    Content adaptive sparse illumination for Fourier ptychography

    Full text link
    Fourier Ptychography (FP) is a recently proposed technique for large field-of-view and high-resolution imaging. Specifically, FP captures a set of low-resolution images under angularly varying illuminations and stitches them together in the Fourier domain. One of FP's main disadvantages is its long capture process, due to the large number of incident illumination angles required. In this letter, exploiting the sparsity of natural images in the Fourier domain, we propose a highly efficient method, termed AFP, which applies content-adaptive sparse illumination to Fourier ptychography by capturing the most informative parts of the scene's spatial spectrum. We validate the effectiveness and efficiency of the reported framework with both simulations and real experiments. Results show that the proposed AFP can shorten the acquisition time of conventional FP by around 30%-60%.
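A minimal sketch of the selection idea, under the assumption that each candidate illumination angle maps to one sub-band of the spatial spectrum and that sub-bands can be ranked by their estimated energy (the names and the keep-ratio below are ours, not the paper's):

```python
def select_illuminations(spectral_energy, keep=0.4):
    """Content-adaptive sparse illumination: rank candidate
    illumination angles by the estimated energy of the spectral
    sub-band each would capture, and keep only the most
    informative fraction, shortening the acquisition."""
    ranked = sorted(spectral_energy, key=spectral_energy.get, reverse=True)
    k = max(1, round(keep * len(ranked)))
    return ranked[:k]

# Energies of the sub-bands reachable from four candidate angles;
# natural images concentrate energy at low spatial frequencies.
energy = {(0, 0): 9.0, (1, 0): 4.0, (0, 1): 3.5, (1, 1): 0.5}
chosen = select_illuminations(energy, keep=0.5)
print(chosen)  # → [(0, 0), (1, 0)]
```

Capturing only the kept angles is what trades a small loss of spectral coverage for the reported 30%-60% reduction in acquisition time.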

    Fourier ptychographic reconstruction using Poisson maximum likelihood and truncated Wirtinger gradient

    Get PDF
    Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we obtain only low-resolution intensity images corresponding to sub-bands of the sample's high-resolution (HR) spatial spectrum and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degradations, such as Gaussian noise, Poisson noise, speckle noise and pupil location error, which can largely degrade the reconstruction. To efficiently address these degradations, we propose a novel FP reconstruction method under a gradient descent optimization framework. The technique utilizes a Poisson maximum likelihood objective for better signal modeling, and a truncated Wirtinger gradient for error removal. Results on both simulated data and real data captured using our laser FPM setup show that the proposed method outperforms other state-of-the-art algorithms. We have also released our source code for non-commercial use.
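A stripped-down, hypothetical sketch of one update (the authors' released code implements the full method; the variable names and the exact truncation rule here are our assumptions): minimize the negative Poisson log-likelihood of intensity measurements y_i ≈ |⟨a_i, z⟩|² by a Wirtinger gradient step, discarding measurements whose residual is anomalously large:

```python
def poisson_wirtinger_step(z, A, y, mu=0.01, trunc=3.0):
    """One truncated Wirtinger-gradient step for Poisson-likelihood
    phase retrieval.  The gradient of the negative log-likelihood
    contributes (1 - y_i/|b_i|^2) * b_i * a_i per measurement, where
    b_i = <a_i, z>; measurements whose residual exceeds `trunc`
    times the mean residual are dropped (truncation)."""
    m = len(y)
    inner = [sum(ai.conjugate() * zi for ai, zi in zip(a, z)) for a in A]
    resid = [abs(abs(b) ** 2 - yi) for b, yi in zip(inner, y)]
    mean_resid = sum(resid) / m
    grad = [0j] * len(z)
    for a, b, yi, r in zip(A, inner, y, resid):
        if r > trunc * mean_resid:   # truncate outlier measurements
            continue
        w = (1.0 - yi / max(abs(b) ** 2, 1e-12)) * b
        for j, aj in enumerate(a):
            grad[j] += w * aj
    return [zj - mu * gj / m for zj, gj in zip(z, grad)]

# With exact, noise-free measurements the gradient vanishes at the
# true signal, so the step leaves it unchanged.
A = [[1 + 0j, 0j], [0j, 1 + 0j]]
z_true = [1 + 0j, 2 + 0j]
y = [1.0, 4.0]
z_new = poisson_wirtinger_step(z_true, A, y)
```

The truncation is what gives robustness: a corrupted measurement (e.g. from pupil location error) produces an outlying residual and is simply excluded from that iteration's gradient.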